Strategies for handling short-lived memory

  1. The libraries free internally.

    1. Using arenas with a scope guard:

      1. Shrinks itself:

        • When to shrink:

          1. Shrinks itself no matter what.

            • More akin to something stack-based.

          2. Shrinks itself if a condition is met, i.e. when it makes sense to free the memory because too much garbage has accumulated.

            • Not stack-based.

        1. Free the last Memory_Block from vmem.Arena -> .Growing with vmem.Arena_Temp

          • Seems to shrink itself, just like runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD (see the sketch below).

            • Tests:

              • arena_free_all()

                • Correctly shrinks the arena.

              • ARENA_TEMP_GUARD()

                • When wrapping the model_create calls with this guard, memory shows the same behavior as arena_free_all: it spikes to 500 MB, then drops to 140~150 MB after ~3 secs.

          • Seems much cleaner than runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD.

          • I'm unsure about the call to release; it seemed odd to me on Windows.

            • Linux seems ok and intuitive.
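
          • A minimal sketch of how I'd wire this up (model_create and the sizes are made up; assumes the ARENA_TEMP_GUARD from core:mem/virtual tested above, imported as vmem):

              package example

              import vmem "core:mem/virtual"

              // Made-up stand-in for a library call that produces a lot of
              // short-lived garbage from the arena.
              model_create :: proc(arena: ^vmem.Arena) {
                  // Deferred guard: on scope exit arena_temp_end frees the
                  // Memory_Blocks allocated past this point, so the .Growing
                  // arena shrinks back down (same behavior as arena_free_all
                  // in the tests above).
                  vmem.ARENA_TEMP_GUARD(arena)

                  scratch := make([]byte, 64 * 1024 * 1024, vmem.arena_allocator(arena))
                  _ = scratch // ...build the model in scratch...
              }

              main :: proc() {
                  arena: vmem.Arena
                  _ = vmem.arena_init_growing(&arena) // kind == .Growing
                  defer vmem.arena_destroy(&arena)

                  model_create(&arena) // memory spikes inside, then is released
              }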

        2. Free the last Memory_Block from context.temp_allocator with runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD

          • If used with runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD, the context.temp_allocator apparently can shrink itself (see the sketch below).

            • Tests:

              • runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD

                • Careful!!:

                  • If the context.temp_allocator is wrapped with something else (a tracking allocator, or Tracy), then the guard is not going to begin.

                    • The condition context.temp_allocator.data == &global_default_temp_allocator_data fails.

          • ~It seems kinda weird though, as the Memory_Blocks are backed by the context.allocator.

            • The memory blocks can be backed by a different allocator (currently I'm using _general_alloc, which is just a more explicit version of the context.allocator).

          • Thoughts:

            • Not a fan of this, as its behavior is really implicit with context.temp_allocator, but... every core lib uses this, so... aaaaaa...
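
          • A minimal sketch (assumes a recent Odin where this lives in base:runtime; do_frame_work is made up):

              package example

              import "base:runtime"

              do_frame_work :: proc() {
                  // Deferred guard: rolls the default temp arena back on scope
                  // exit. It only begins when context.temp_allocator really is
                  // the default one; if it's wrapped (tracking allocator,
                  // Tracy), the check against &global_default_temp_allocator_data
                  // fails and the guard silently does nothing.
                  runtime.DEFAULT_TEMP_ALLOCATOR_TEMP_GUARD()

                  scratch := make([]byte, 1024, context.temp_allocator)
                  _ = scratch // ...short-lived work...
              }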

        3. Roll back the offset of vmem.Arena -> .Static with vmem.arena_static_reset_to + shrink if a condition is met.

          • I would have to implement a guard for this, so I can store the offset to roll back to, just like mem.Arena_Temp_Memory (see the sketch below).

          • When calling vmem.arena_static_reset_to, shrink if the condition is met.

            • Greater than the minimum size, and reserved - used greater than or equal to sqrt(reserved); something like this.

          • This seems to be really similar to a vmem.Arena -> .Growing.
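
          • A hypothetical guard for this (Static_Temp, static_temp_begin/end, and decommit_excess are all made up; assumes vmem.Arena exposes total_used/total_reserved counters):

              package example

              import "core:math"
              import vmem "core:mem/virtual"

              // Modeled on mem.Arena_Temp_Memory: store the offset to roll back to.
              Static_Temp :: struct {
                  arena:       ^vmem.Arena,
                  prev_offset: uint,
              }

              static_temp_begin :: proc(arena: ^vmem.Arena) -> Static_Temp {
                  return {arena = arena, prev_offset = arena.total_used}
              }

              static_temp_end :: proc(temp: Static_Temp, minimum_size: uint) {
                  arena := temp.arena
                  _ = vmem.arena_static_reset_to(arena, temp.prev_offset)

                  // Shrink only above the minimum size and when the slack is at
                  // least sqrt(reserved); something like this.
                  slack := arena.total_reserved - arena.total_used
                  if arena.total_reserved > minimum_size &&
                     f64(slack) >= math.sqrt(f64(arena.total_reserved)) {
                      // decommit_excess(arena) // hypothetical: vmem has no partial-shrink call
                  }
              }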

        4. mem.Arena_Temp_Memory (roll back the original Arena offset) + shrink if a condition is met.

          • When calling mem.end_arena_temp_memory, shrink if the condition is met.

            • Greater than the minimum size, and reserved - used greater than or equal to sqrt(reserved); something like this.

          • I wouldn't use this one directly, as I'm using vmem.Arena, not mem.Arena.

          • This seems to be really similar to a mem.Dynamic_Arena, I think.

      2. Doesn't shrink itself:

        1. Roll back the offset of vmem.Arena -> .Static with vmem.arena_static_reset_to.

          • It doesn't shrink itself automatically on rollback.

          • I would have to implement a guard for this, so I can store the offset to roll back to, just like mem.Arena_Temp_Memory.

          • Starts to look like a stack.

        2. ~mem.Arena_Temp_Memory (roll back the original Arena offset)

          • It doesn't shrink itself automatically on rollback (see the sketch below).

          • The vmem.arena_static_reset_to for vmem.Arena seems more convenient.

          • Starts to look like a stack.
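
          • A minimal sketch of the plain rollback (no shrinking), using core:mem:

              package example

              import "core:mem"

              main :: proc() {
                  backing := make([]byte, 1 * 1024 * 1024)
                  defer delete(backing)

                  arena: mem.Arena
                  mem.arena_init(&arena, backing)

                  temp := mem.begin_arena_temp_memory(&arena)
                  scratch := make([]byte, 64 * 1024, mem.arena_allocator(&arena))
                  _ = scratch // ...short-lived work...
                  mem.end_arena_temp_memory(temp) // offset rolled back; the
                                                  // backing memory never shrinks
              }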

      • Characteristics:

        • The user doesn't control the deallocations.

        • More akin to something stack-based:

          • This strategy places more guards internally, which enforces something closer to stack-based memory.

    2. ~General allocator, manually freeing after every alloc.

      • This doesn't seem compatible with the concept of "temp" or "garbage".

      • This is useful for something that lives for an undetermined amount of time, and maybe not even then.

        • Even when spawning and deleting entities, an optimized arena could be used...

      1. context.allocator (see the sketch below).
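
        • A minimal sketch of the manual pairing:

            package example

            main :: proc() {
                // General allocator: every allocation is paired with an
                // explicit free; nothing here resembles "temp" or "garbage".
                data := make([]byte, 1024, context.allocator)
                defer delete(data, context.allocator)
                _ = data // ...long- or unknown-lifetime use...
            }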

  2. The libraries don't free internally.

    1. Using arenas with a scope guard:

      • Same options as in 1.1 above ('Using arenas with a scope guard', where the libraries place the guards internally).

      • Characteristics:

        • The user controls the deallocation.

          • guards are only placed by the user, not internally by libraries (see the sketch after this list).

        • When the user has access to the deallocation, it might be "too late": the memory spike already happened either way, as there was just too much garbage to clean up.

          • The only way to optimize this is for the user to deconstruct the library implementation and place the guard wherever they see the best fit.

        • Not stack-based.

          • Memory keeps existing until the guard ends, not when the stack scope of something internal ends.

          • "Memory that doesn't make sense to exist keeps existing until removed by the user".

            • This statement actually doesn't make sense, as the scope is defined by the guard, not by the stack scope of the functions.
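
      • A minimal sketch of a user-placed guard (lib_build_mesh and the sizes are made up; same ARENA_TEMP_GUARD as above):

          package example

          import "core:mem"
          import vmem "core:mem/virtual"

          // Made-up library call: it allocates from the given allocator but
          // never frees anything itself.
          lib_build_mesh :: proc(allocator: mem.Allocator) {
              scratch := make([]byte, 32 * 1024 * 1024, allocator)
              _ = scratch
          }

          main :: proc() {
              arena: vmem.Arena
              _ = vmem.arena_init_growing(&arena)
              defer vmem.arena_destroy(&arena)

              {
                  // The guard is placed by the user, not by the library. All
                  // the garbage lives until this scope ends, so the spike
                  // still happens; the guard only bounds its lifetime.
                  vmem.ARENA_TEMP_GUARD(&arena)
                  lib_build_mesh(vmem.arena_allocator(&arena))
              }
          }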

    2. Using arenas without a scope guard:

      • Characteristics:

        • Frees are arbitrary.

        • The user chooses when to free the memory.

        • When the user has access to the deallocation, it might be "too late": the memory spike already happened either way, as there was just too much garbage to clean up.

          • The only way to optimize this is for the user to deconstruct the library implementation and free the arena wherever they see the best fit.

      1. _temp_alloc

        • It's just my version of the context.temp_allocator, acting explicitly instead of implicitly through the context system.

        • This fights against all the libraries that use the context.temp_allocator implicitly, as both arenas would be doing the same thing; it's ugly.

      2. context.temp_allocator

        • It doesn't shrink itself.

        • Huge spike because of this; the user has to call free_all(context.temp_allocator) manually at some point (see the sketch below).
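
        • A minimal sketch of the manual free point (once per frame here, just as an example):

            package example

            main :: proc() {
                for _ in 0 ..< 3 { // stand-in for the frame loop
                    scratch := make([]byte, 1024, context.temp_allocator)
                    _ = scratch // ...per-frame garbage...

                    // Arbitrary, user-chosen free point; without it the temp
                    // arena only grows and the spike never comes back down.
                    free_all(context.temp_allocator)
                }
            }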